show_chest_xray()
show_cancer_cells()
show_miniMIT()
show_kvasir_v2()
300 epochs with an early stopping callback whose patience is fixed at 10
Adam optimizer with a learning rate of 1e-6
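As a minimal sketch of the stopping rule used here (patience of 10 epochs on the validation loss, mirroring what Keras's EarlyStopping callback does when monitoring `val_loss` — the helper name and the loss curve below are illustrative assumptions):

```python
def early_stop_epoch(val_losses, patience=10):
    """Return the epoch index at which training would stop, or None.

    Training stops once the validation loss has not improved for
    `patience` consecutive epochs.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# A run whose validation loss plateaus after epoch 4 stops 10 epochs later,
# well before the 300-epoch budget is exhausted.
losses = [1.0, 0.8, 0.6, 0.5, 0.45] + [0.45] * 20
print(early_stop_epoch(losses, patience=10))
```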
Looks like we are mainly predicting chest (that the image is a chest X-ray) rather than pneumonia/normal!
pre-trained models, even those trained on larger data sets, can be overfitted to their original domain
If the categories for our task of interest exist in the original data set used to train the ConvNet, our model can be qualified as strong because it can justify its output.
Otherwise, our model is weak, even with excellent metric results.
Xception network by François Chollet (creator of Keras)
Depthwise convolution decreases computing time while giving nearly the same results as a standard convolution
Based on the first 3 blocks of VGG-19, we construct 2 blocks of depthwise convolution, each followed by a normalization layer to prevent overfitting
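The savings from depthwise separable convolution can be seen from a simple parameter count: a standard convolution uses one k×k×C_in filter per output channel, while the separable version uses one k×k filter per input channel plus a 1×1 pointwise mix. A quick sketch (biases omitted; the 3×3 kernel and 128-channel widths are illustrative assumptions, not the exact layer sizes of the model above):

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k×k×c_in filter per output channel.
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # Depthwise step: one k×k filter per input channel,
    # then a 1×1 pointwise convolution mixing channels.
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 128)       # 147456 parameters
sep = separable_params(3, 128, 128)  # 1152 + 16384 = 17536 parameters
print(std, sep, round(std / sep, 1))  # roughly an 8x reduction
```

The same ratio applies to the multiply-accumulate count per spatial position, which is where the computing-time reduction comes from.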
We used 4 data sets with various properties to choose when, how and which Transfer Learning scenarios to use for an image classification project
The most important ones are the size of the data set and its similarity to the original data set
If the data set is large, fine-tuning a pre-trained network is probably the best idea
If the data set is small, the best idea might be to train a linear classifier on features extracted from a pre-trained network's fully connected layers.
If the data set you use is very similar to the ImageNet data set, you should just use the best pre-trained network available nowadays
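The small-data-set scenario above can be sketched in a few lines: freeze the backbone, treat its penultimate-layer activations as fixed features, and fit only a linear classifier on top. In this toy version the "features" are synthetic stand-ins (random vectors with a linearly separable label), since the point is only the training loop for the linear head:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen ConvNet features: in practice X would hold
# penultimate-layer activations from a pre-trained network.
n, d = 200, 64
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only a linear (logistic-regression) head; the backbone
# producing X stays untouched.
w = np.zeros(d)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad_w = X.T @ (p - y) / n              # cross-entropy gradient
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"train accuracy: {acc:.2f}")
```

Because only `w` and `b` are learned, this needs far less data than fine-tuning the whole network, which is exactly why it suits the small-data-set case.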
We found that even a neural network with excellent metrics can be useless if its output isn't relevant to humans. This is why we strongly encourage always checking what a neural network actually predicts instead of only looking for the best metric results